Disentangled behavioural representations

Neural Information Processing Systems

Individual characteristics in human decision-making are often quantified by fitting a parametric cognitive model to subjects' behavior and then studying differences between them in the associated parameter space. However, these models often fit behavior more poorly than recurrent neural networks (RNNs), which are more flexible and make fewer assumptions about the underlying decision-making processes. Unfortunately, the parameter and latent activity spaces of RNNs are generally high-dimensional and uninterpretable, making it hard to use them to study individual differences. Here, we show how to benefit from the flexibility of RNNs while representing individual differences in a low-dimensional and interpretable space. To achieve this, we propose a novel end-to-end learning framework in which an encoder is trained to map the behavior of subjects into a low-dimensional latent space. These low-dimensional representations are used to generate the parameters of individual RNNs corresponding to the decision-making process of each subject. We introduce terms into the loss function that ensure that the latent dimensions are informative and disentangled, i.e., encouraged to have distinct effects on behavior. This allows them to align with separate facets of individual differences. We illustrate the performance of our framework on synthetic data as well as a dataset including the behavior of patients with psychiatric disorders.
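The pipeline in the abstract (encoder compresses a subject's behaviour into a low-dimensional latent, and that latent generates the parameters of a subject-specific RNN) can be sketched in a few lines. This is a minimal illustrative sketch only: the dimension sizes, the mean-pooling encoder, and the linear "hypernetwork" are assumptions for exposition, not the paper's trained architecture, and the informativeness/disentanglement loss terms are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper)
LATENT_DIM = 2   # low-dimensional behavioural embedding z
HIDDEN_DIM = 8   # hidden units of the per-subject RNN
N_ACTIONS = 2    # e.g. a two-armed bandit task

def encode(behaviour, W_enc):
    """Map a subject's behaviour (T x N_ACTIONS one-hot choices) to a
    low-dimensional latent z; here, mean-pooling plus a linear map."""
    return behaviour.mean(axis=0) @ W_enc                  # shape (LATENT_DIM,)

def generate_rnn_params(z, W_hyper, b_hyper):
    """'Hypernetwork' step: the latent z linearly generates the flattened
    parameters of that subject's RNN, then they are reshaped into weights."""
    theta = z @ W_hyper + b_hyper
    n_wh = HIDDEN_DIM * HIDDEN_DIM
    n_wx = N_ACTIONS * HIDDEN_DIM
    W_h = theta[:n_wh].reshape(HIDDEN_DIM, HIDDEN_DIM)
    W_x = theta[n_wh:n_wh + n_wx].reshape(N_ACTIONS, HIDDEN_DIM)
    W_o = theta[n_wh + n_wx:].reshape(HIDDEN_DIM, N_ACTIONS)
    return W_h, W_x, W_o

def rnn_policy(choices, params):
    """Unroll the generated RNN over the choice sequence, returning
    per-trial action logits (the modelled decision process)."""
    W_h, W_x, W_o = params
    h = np.zeros(HIDDEN_DIM)
    logits = []
    for x in choices:
        h = np.tanh(h @ W_h + x @ W_x)
        logits.append(h @ W_o)
    return np.array(logits)

# Random weights stand in for the end-to-end trained ones.
n_theta = HIDDEN_DIM * HIDDEN_DIM + 2 * N_ACTIONS * HIDDEN_DIM
W_enc = rng.normal(size=(N_ACTIONS, LATENT_DIM))
W_hyper = rng.normal(size=(LATENT_DIM, n_theta)) * 0.1
b_hyper = rng.normal(size=n_theta) * 0.1

# 20 one-hot choices from a synthetic subject
behaviour = np.eye(N_ACTIONS)[rng.integers(0, N_ACTIONS, size=20)]
z = encode(behaviour, W_enc)
logits = rnn_policy(behaviour, generate_rnn_params(z, W_hyper, b_hyper))
print(z.shape, logits.shape)   # (2,) (20, 2)
```

In the actual framework, all of these weights would be trained jointly to predict each subject's choices, with additional loss terms pushing the latent dimensions to be informative and to have separated effects on behaviour; here, only the data flow from behaviour to latent to generated RNN is shown.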


Reviews: Disentangled behavioural representations

Neural Information Processing Systems

After rebuttal: I focused my comments and attention on the utility of this method in providing behavioral representations, whereas R3 and the authors drew my attention to the novelty of the separation loss, and to their specific intent to primarily model _individual decision-making processes_, not behavior more generally. The text could use clarification on this point in several places. I still think that existing PGM-based approaches with subject-level random variables are a fair baseline to compare against, since they also create latent embeddings of behavior on a per-subject basis. I find the originality low in the context of that prior work, including Emily Fox's work on speaker diarization and behavioral modeling, Matthew Johnson's recent work on structured latent space variational autoencoders, and others. Given that the input to the model is a bag of low-dimensional sequences, probabilistic graphical models are an appropriate baseline here.



Disentangled behavioural representations

Dezfouli, Amir, Ashtiani, Hassan, Ghattas, Omar, Nock, Richard, Dayan, Peter, Ong, Cheng Soon

Neural Information Processing Systems
